
Implementing AI Governance: from Framework to Practice

2024-05-15

Key Takeaways:
- This article provides guidance on navigating existing frameworks and introduces how to get started with AI governance. We developed a process with actionable steps and specific documents you can request to jump-start your strategy.
- AI governance is a system of rules, processes, frameworks, and tools within an organization to ensure the ethical and responsible development of AI.
- Implementing AI governance is crucial because of existing and upcoming regulations, as well as reputational (PR) and societal risks. It also leads to better and more efficient development.
- Many frameworks attempt to define AI governance in theory and offer practical implementation ideas. This article covers five frameworks: by the IEEE, the EU, Montreal, AIGA, and NIST.
- AI governance is interdisciplinary and needs to be rooted in the AI team, development team, legal team, domain experts / users, and customer success team.

As AI systems become more prevalent and integral to various industries, concerns about data privacy, algorithmic bias, and the impact of AI on decision-making have grown with them. A Gartner survey found that 41% of organizations in the U.S., U.K., and Germany have experienced an AI privacy breach or security incident, and numerous news articles and reports describe the potential risks and negative effects of AI deployment. For example, the use of AI in hiring has been criticized for perpetuating bias and discrimination, while AI in criminal justice systems has produced inaccurate and unfair outcomes in certain cases. Organizations must therefore not only prioritize the ethical and responsible use of AI, but also actively address and mitigate the risks and negative consequences that may arise.

AI governance is the key to enabling trustworthy, responsible, and efficient AI systems. Establishing governance frameworks as soon as organizations implement AI is crucial to ensure its development and use align with ethical and societal standards. In this post, we will explore what AI governance is, why it is important, and the frameworks that have been developed to guide its implementation. We will also discuss the process of putting these frameworks into practice and introduce a practice-tested framework for AI governance.

What is AI Governance and why should I care about it?

The central function of AI governance is to ensure the ethical and responsible development and use of AI. AI governance is a system of rules, processes, frameworks, and technological tools employed in an organization to ensure that its use of AI aligns with organizational principles, legal requirements, and social and ethical standards. It is part of the organization's broader governance landscape and intertwines with IT governance, data governance, and general corporate governance.

AI governance enables organizations to unleash the full potential of AI while mitigating risks like bias, discrimination, and privacy violations. The regulatory landscape keeps evolving quickly: the upcoming EU AI Act, the US AI Bill of Rights, and Chinese AI regulations will make AI governance systems obligatory.

These regulations will create compliance overhead and can bring heavy fines for companies that fail to prepare accordingly. This is one reason to set up an AI governance process today and ensure that developed AI systems meet or exceed regulatory requirements. Beyond compliance, AI governance not only makes AI fair and trustworthy but also increases AI adoption and improves business outcomes. A structured process enhances transparency, understanding, and resource allocation, which makes AI development more efficient.

Gartner expects that by 2026, organizations that operationalize AI transparency, trust, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance.

Start today and make AI governance your competitive advantage. Learn how below.

AI Governance Frameworks

Ethical principles like fairness must be translated into processes and tasks to make them actionable.
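As a minimal illustration of what "translating a principle into a task" can look like, the sketch below computes the demographic parity difference, one common fairness metric. The function name, the toy data, and the 0.1 tolerance are illustrative assumptions on our part, not requirements of any framework discussed in this article.

```python
# Minimal sketch: turning the fairness principle into a measurable check.
# Demographic parity difference is the gap in positive-outcome rates
# between two groups. The 0.1 tolerance is an illustrative assumption.

def demographic_parity_difference(outcomes_a: list[int], outcomes_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates (outcomes are 0 or 1)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Example: positive hiring decisions for two applicant groups.
group_a = [1, 0, 1, 1, 0, 1]  # 4/6 positive rate
group_b = [0, 0, 1, 0, 1, 0]  # 2/6 positive rate

gap = demographic_parity_difference(group_a, group_b)
if gap > 0.1:  # tolerance would be set by organizational policy
    print(f"Fairness check failed: parity gap {gap:.2f} exceeds 0.1")
```

A check like this only becomes governance once it is embedded in a process: someone owns the threshold, the result is documented, and a failing check triggers a defined response.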

Even though standards for AI are mostly non-binding, several governance frameworks have been published to guide the development and use of AI. Some of the most prominent frameworks include:

- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
- The European Union's Ethics Guidelines for Trustworthy AI, the basis for the EU AI Act
- The Montreal Declaration for Responsible AI
- The AIGA AI Governance Framework
- The NIST Artificial Intelligence Risk Management Framework

In the following, we give a rough overview of the core principles and distinguishing factors of these frameworks. If you are mainly interested in actionable tips on how to set up your organization's governance process, feel free to skip to the next section!

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

The IEEE initiative comprises a collection of standards, including documents on specific topics such as system design, certification, and bias. The framework rests on eight principles: transparency, accountability, awareness of limitations, safety and well-being, reliability and dependability, equity, inclusivity, and privacy protection.

In addition to the eight principles, the initiative includes a set of metrics to assess the extent to which AI systems adhere to them. These metrics are designed to provide a standardized way of evaluating the ethical and responsible use of AI across different industries and applications. They consider factors such as the level of transparency of the AI system, the degree to which the system is accountable to humans, and the measures in place to protect user privacy.
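The IEEE documents do not prescribe code; purely as an illustration, a team could record such per-principle assessments in a structure like the one below. The 0.0–1.0 scale and the 0.7 review threshold are our own assumptions, not part of the standards.

```python
# Hypothetical sketch: recording per-principle scores for an AI system.
# The 0.0-1.0 scale and 0.7 threshold are assumptions for illustration;
# the IEEE standards define the principles, not this representation.

ieee_scores = {
    "transparency": 0.8,
    "accountability": 0.9,
    "awareness of limitations": 0.6,
    "safety and well-being": 0.85,
    "reliability and dependability": 0.75,
    "equity": 0.7,
    "inclusivity": 0.65,
    "privacy protection": 0.9,
}

# Flag principles whose score falls below the review threshold.
needs_review = [p for p, score in ieee_scores.items() if score < 0.7]
print("Principles needing review:", needs_review)
```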

The European Union’s Ethics Guidelines for Trustworthy AI

In 2019, the EU's High-Level Expert Group on AI (HLEG) published guidelines on trustworthy AI that served as a basis for the policy recommendations preparing the EU AI Act. The guidelines define trustworthy AI as lawful, ethical, and robust.

The ethics guidelines themselves are based on seven key requirements that mostly overlap with the eight principles of the IEEE. They additionally introduce the concept of human oversight and extend the well-being requirement from societal to environmental well-being.

The EU provides the Assessment List for Trustworthy Artificial Intelligence (ALTAI) to support AI developers and deployers in developing trustworthy AI. The list is also available online as an interactive web tool.
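ALTAI itself is a questionnaire. Purely as an illustration of how a team might track its answers internally, consider the sketch below; the questions shown are paraphrased examples in the spirit of ALTAI's requirements, not the official wording.

```python
# Illustrative self-assessment tracker in the spirit of ALTAI.
# Questions are paraphrased examples, not the official ALTAI items.

checklist = [
    {"requirement": "human agency and oversight",
     "question": "Can a human override the system's decisions?", "answered_yes": True},
    {"requirement": "transparency",
     "question": "Are the system's outputs explained to end users?", "answered_yes": False},
    {"requirement": "privacy and data governance",
     "question": "Is personal data minimized and access-controlled?", "answered_yes": True},
]

# Summarize completion and surface open items for follow-up.
completion = sum(item["answered_yes"] for item in checklist) / len(checklist)
print(f"Checklist completion: {completion:.0%}")
for item in checklist:
    if not item["answered_yes"]:
        print("Open item:", item["requirement"], "-", item["question"])
```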

The Montreal Declaration for Responsible AI

Like the two frameworks above, the Montreal Declaration for Responsible AI is principle-based, in this case with ten principles. It largely covers the same areas, includes ecological responsibility like the EU guidelines, and additionally emphasizes democratic participation, respect for autonomy, and prudence during development.

The principles have been distilled into eight recommendations for achieving the digital transition within the declaration's ethical framework, such as implementing audits and certifications, establishing independent oversight organizations, providing ethics education for development stakeholders, and empowering users.

The AIGA AI Governance Framework

The framework, developed at the University of Turku, focuses on putting ethical AI into practice and emphasizes practical recommendations. It aims to support compliance with the upcoming EU AI Act.

Like the other frameworks, it rests on seven core principles: responsibility, transparency, explainability, accuracy and fairness, privacy and security, human control of technology, and professional responsibility. Its differentiator is that these principles are translated into tasks along the AI lifecycle, covering the AI system as a whole as well as the algorithms and data used. The goal is to provide clear steps for using AI, from planning the design to monitoring deployed systems.
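To make the lifecycle idea concrete, here is a hedged sketch of how principles might map onto lifecycle stages as tasks. The stage names and tasks below are our own illustrative choices, not taken verbatim from the AIGA task catalog.

```python
# Hypothetical mapping of governance tasks onto AI lifecycle stages,
# in the spirit of the AIGA framework. Stage and task names are
# illustrative; the framework's own task catalog should be consulted.

lifecycle_tasks = {
    "design": [
        ("transparency", "Document intended use and known limitations"),
        ("privacy and security", "Run a data protection impact assessment"),
    ],
    "development": [
        ("accuracy and fairness", "Evaluate model performance per user group"),
        ("explainability", "Attach feature-attribution reports to releases"),
    ],
    "deployment": [
        ("human control of technology", "Define escalation paths for overrides"),
        ("responsibility", "Assign a named owner for the production system"),
    ],
    "monitoring": [
        ("accuracy and fairness", "Re-check fairness metrics on live traffic"),
    ],
}

for stage, tasks in lifecycle_tasks.items():
    print(stage.upper())
    for principle, task in tasks:
        print(f"  [{principle}] {task}")
```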

NIST Artificial Intelligence Risk Management Framework

The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has published this framework to help organizations manage the risks of AI. Like the AIGA framework, it aims to provide a flexible, structured, and measurable process to translate governance into practice.

It defines four core functions, with govern acting as a cross-cutting function to the other three: map, measure, and manage AI risks. Each of these high-level functions is broken down into categories and subcategories containing specific actions and outcomes.

Even though it gives a deep understanding of actionable tasks, it doesn’t propose an ordered set of steps or a checklist of questions.
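As a rough illustration of that function-category-subcategory hierarchy, a risk-register entry could be tagged like this. The category labels and field names are simplified assumptions of ours, not the official NIST subcategory identifiers.

```python
# Hypothetical risk-register entry reflecting the NIST AI RMF hierarchy
# (govern as a cross-cutting function over map, measure, manage).
# Category labels are simplified; consult the RMF for the official ones.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    function: str      # "map", "measure", or "manage"
    category: str      # simplified category label
    description: str
    owner: str         # govern in action: every risk has an accountable owner

register = [
    RiskEntry("map", "context", "Model may be used outside intended scope", "ai-team"),
    RiskEntry("measure", "bias evaluation", "Disparate error rates across groups", "dev-team"),
    RiskEntry("manage", "response", "Rollback plan for degraded model quality", "ops"),
]

for entry in register:
    print(f"[{entry.function}/{entry.category}] {entry.description} (owner: {entry.owner})")
```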


